
    New Insights into History Matching via Sequential Monte Carlo

    The aim of the history matching method is to locate non-implausible regions of the parameter space of complex deterministic or stochastic models by matching model outputs with data. It does this via a series of waves where, at each wave, an emulator is fitted to a small number of training samples. An implausibility measure is defined which takes into account the closeness of simulated and observed outputs as well as emulator uncertainty. As the waves progress the emulator becomes more accurate, so that training samples become more concentrated on promising regions of the space and poorer parts of the space are rejected with greater confidence. Whilst history matching has proved useful, existing implementations are not fully automated and some ad hoc choices are made during the process, requiring user intervention and considerable time. This is especially the case when the non-implausible region becomes small and it is difficult to sample it uniformly to generate new training points. In this article we develop a sequential Monte Carlo (SMC) algorithm that semi-automates the implementation. Our novel SMC approach reveals that the history matching method yields a non-implausible distribution that can be multi-modal, highly irregular and very difficult to sample uniformly. Our SMC approach offers much more reliable sampling of the non-implausible space, albeit at the cost of additional computation compared with other approaches used in the literature.
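
    The implausibility measure described above is, in its standard single-output form, the distance between the observed output and the emulator prediction standardised by the combined uncertainty. The sketch below illustrates that calculation; the function names, the optional model-discrepancy term and the cutoff of 3 are illustrative assumptions rather than details taken from this article.

```python
import numpy as np

def implausibility(emulator_mean, emulator_var, z_obs, obs_var,
                   model_discrepancy_var=0.0):
    """Implausibility of a parameter point: standardised distance between the
    emulator prediction and the observed output, accounting for emulator,
    observation and (optional) model-discrepancy variance."""
    total_var = emulator_var + obs_var + model_discrepancy_var
    return np.abs(z_obs - emulator_mean) / np.sqrt(total_var)

def non_implausible(emulator_mean, emulator_var, z_obs, obs_var, cutoff=3.0):
    """A point is retained as non-implausible if I(x) falls below a cutoff;
    3 is a common (assumed) choice motivated by the three-sigma rule."""
    return implausibility(emulator_mean, emulator_var, z_obs, obs_var) < cutoff

# Example: emulator mean 3.2 and variance 0.4 against z = 2.5 observed with
# variance 0.1 gives I ~= 0.99, well inside the cutoff.
print(implausibility(3.2, 0.4, 2.5, 0.1))
```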

    Unbiased and Consistent Nested Sampling via Sequential Monte Carlo

    We introduce a new class of sequential Monte Carlo methods called Nested Sampling via Sequential Monte Carlo (NS-SMC), which reframes the Nested Sampling method of Skilling (2006) in terms of sequential Monte Carlo techniques. This new framework allows convergence results to be obtained in the setting where Markov chain Monte Carlo (MCMC) is used to produce new samples. An additional benefit is that marginal likelihood estimates are unbiased. In contrast to NS, the analysis of NS-SMC does not require the (unrealistic) assumption that the simulated samples be independent. As the original NS algorithm is a special case of NS-SMC, this provides insights as to why NS seems to produce accurate estimates despite a typical violation of its assumptions. For applications of NS-SMC, we give advice on tuning MCMC kernels in an automated manner via a preliminary pilot run, and present a new method for appropriately choosing the number of MCMC repeats at each iteration. Finally, a numerical study is conducted where the performance of NS-SMC and temperature-annealed SMC is compared on several challenging and realistic problems. MATLAB code for our experiments is available at https://github.com/LeahPrice/SMC-NS.
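
    As a companion to the abstract above, the following is a minimal, illustrative nested sampling loop on a toy two-dimensional problem, using naive rejection from the prior to draw each constrained replacement point. It is not the authors' NS-SMC algorithm (which uses MCMC moves and SMC-style weighting); the toy target, prior and all settings are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_likelihood(theta):
    # Toy target: standard bivariate normal likelihood (illustrative only).
    return -0.5 * np.sum(np.atleast_2d(theta) ** 2, axis=1)

def nested_sampling(n_live=100, n_iter=500):
    """Minimal nested-sampling loop on a Uniform(-5, 5)^2 prior, drawing each
    replacement point by naive rejection from the prior. Practical NS / NS-SMC
    implementations replace this step with MCMC moves under the constraint."""
    live = rng.uniform(-5.0, 5.0, size=(n_live, 2))
    log_X = 0.0            # log of remaining prior mass
    log_Z = -np.inf        # running log marginal-likelihood estimate
    log_shrink = np.log1p(-np.exp(-1.0 / n_live))  # expected per-step shrinkage
    for _ in range(n_iter):
        ll = log_likelihood(live)
        worst = int(np.argmin(ll))
        log_Z = np.logaddexp(log_Z, log_X + log_shrink + ll[worst])
        log_X -= 1.0 / n_live
        # Replace the discarded point with a prior draw above the threshold.
        while True:
            cand = rng.uniform(-5.0, 5.0, size=2)
            if log_likelihood(cand)[0] > ll[worst]:
                live[worst] = cand
                break
    # Add the contribution of the remaining live points.
    ll = log_likelihood(live)
    log_Z = np.logaddexp(log_Z, log_X + np.log(np.mean(np.exp(ll))))
    return log_Z

# True value is log(2*pi / 100) ~= -2.77 for this toy problem.
print(nested_sampling())
```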

    Pre-processing for approximate Bayesian computation in image analysis

    Most of the existing algorithms for approximate Bayesian computation (ABC) assume that it is feasible to simulate pseudo-data from the model at each iteration. However, the computational cost of these simulations can be prohibitive for high dimensional data. An important example is the Potts model, which is commonly used in image analysis. Images encountered in real-world applications can have millions of pixels, so scalability is a major concern. We apply ABC with a synthetic likelihood to the hidden Potts model with additive Gaussian noise. Using a pre-processing step, we fit a binding function to model the relationship between the model parameters and the synthetic likelihood parameters. Our numerical experiments demonstrate that the precomputed binding function dramatically improves the scalability of ABC, reducing the average runtime required for model fitting from 71 hours to only 7 minutes. We also illustrate the method by estimating the smoothing parameter for remotely sensed satellite imagery. Without precomputation, Bayesian inference is impractical for datasets of that scale.
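
    The binding-function idea can be sketched as follows: simulate summary statistics on a grid of parameter values once, fit smooth curves mapping the parameter to the mean and standard deviation of the summary, and then evaluate a Gaussian synthetic likelihood with no further simulation at inference time. The simulator below is a cheap stand-in (a real application would use, for example, draws of the Potts sufficient statistic), and the function names, grid and settings are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_summary(beta, n_reps=50):
    """Placeholder for an expensive simulator of the summary statistic;
    here a cheap synthetic stand-in is used purely for illustration."""
    return 1000.0 * np.tanh(beta) + rng.normal(0.0, 5.0 + 2.0 * beta, size=n_reps)

# --- Pre-processing: fit the binding function on a grid of parameter values ---
beta_grid = np.linspace(0.0, 2.0, 41)
means = np.array([simulate_summary(b).mean() for b in beta_grid])
sds = np.array([simulate_summary(b).std(ddof=1) for b in beta_grid])
mean_fit = np.polynomial.Polynomial.fit(beta_grid, means, deg=3)
sd_fit = np.polynomial.Polynomial.fit(beta_grid, sds, deg=3)

def synthetic_loglik(beta, observed_summary):
    """Gaussian synthetic likelihood evaluated via the precomputed binding
    function -- no simulation is needed at inference time."""
    mu, sd = mean_fit(beta), max(sd_fit(beta), 1e-6)
    return -0.5 * np.log(2 * np.pi * sd**2) - 0.5 * ((observed_summary - mu) / sd) ** 2

# Example evaluation for a hypothetical observed summary value.
print(synthetic_loglik(0.8, observed_summary=660.0))
```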

    An approach for finding fully Bayesian optimal designs using normal-based approximations to loss functions

    The generation of decision-theoretic Bayesian optimal designs is complicated by the significant computational challenge of minimising an analytically intractable expected loss function over a potentially high-dimensional design space. A new general approach for approximately finding Bayesian optimal designs is proposed that uses computationally efficient normal-based approximations to posterior summaries to aid in approximating the expected loss. This new approach is demonstrated on illustrative, yet challenging, examples including hierarchical models for blocked experiments, and experimental aims of parameter estimation and model discrimination. Where possible, the results of the proposed methodology are compared, both in terms of performance and computing time, to results from using computationally more expensive, but potentially more accurate, Monte Carlo approximations. Moreover, the methodology is also applied to problems where the use of Monte Carlo approximations is computationally infeasible.
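
    As a rough illustration of the normal-based idea (not the paper's own examples or code), the sketch below approximates the posterior precision at each prior draw by prior precision plus Fisher information, and averages a log-determinant surrogate for the expected loss over prior draws for a simple logistic regression; the model, priors and candidate designs are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def fisher_info(design, theta):
    """Fisher information for a logistic regression with a single covariate x
    (an illustrative model, not one of the paper's examples)."""
    eta = theta[0] + theta[1] * design
    p = 1.0 / (1.0 + np.exp(-eta))
    w = p * (1.0 - p)
    X = np.column_stack([np.ones_like(design), design])
    return (X * w[:, None]).T @ X

def approx_expected_loss(design, prior_mean, prior_cov, n_draws=500):
    """Normal-based surrogate for the expected loss: average, over prior draws
    of theta, the log-determinant of the approximate posterior covariance
    implied by (prior precision + Fisher information). Smaller is better."""
    prior_prec = np.linalg.inv(prior_cov)
    thetas = rng.multivariate_normal(prior_mean, prior_cov, size=n_draws)
    losses = []
    for theta in thetas:
        post_prec = prior_prec + fisher_info(design, theta)
        _, logdet = np.linalg.slogdet(post_prec)
        losses.append(-logdet)   # -log det(precision) = log det(covariance)
    return np.mean(losses)

# Compare two candidate four-point designs on x in [-1, 1].
d1 = np.array([-1.0, -0.3, 0.3, 1.0])
d2 = np.array([0.0, 0.1, 0.2, 0.3])
print(approx_expected_loss(d1, np.zeros(2), np.eye(2)),
      approx_expected_loss(d2, np.zeros(2), np.eye(2)))
```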

    Optimal experimental design for predator–prey functional response experiments

    Functional response models are important in understanding predator–prey interactions. The development of functional response methodology has progressed from mechanistic models to more statistically motivated models that can account for variance and the over-dispersion commonly seen in the datasets collected from functional response experiments. However, little guidance is available for those wishing to design functional response experiments that are optimal for parameter estimation. It is worth noting that optimally designed experiments may require smaller sample sizes to achieve the same statistical outcomes as non-optimally designed experiments. In this paper, we develop a model-based approach to optimal experimental design for functional response experiments in the presence of parameter uncertainty (also known as a robust optimal design approach). Further, we develop and compare new utility functions which better focus on the statistical efficiency of the designs; these utilities are generally applicable for robust optimal design in other applications (not just in functional response). The methods are illustrated using a beta-binomial functional response model for two published datasets: an experiment involving the freshwater predator Notonecta glauca (an aquatic insect) preying on Asellus aquaticus (a small crustacean), and another experiment involving a ladybird beetle (Propylea quatuordecimpunctata L.) preying on the black bean aphid (Aphis fabae Scopoli). As a by-product, we also derive necessary quantities to perform optimal design for beta-binomial regression models, which may be useful in other applications.
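
    A minimal sketch of a beta-binomial functional response likelihood is given below, assuming a Holling type II (disc equation) mean without prey depletion and a precision-style over-dispersion parameter; the parameterisation, data and settings are illustrative assumptions, not the model or datasets analysed in the paper.

```python
import numpy as np
from scipy.stats import betabinom

def holling_type_II(n0, attack, handling, T=1.0):
    """Expected proportion of prey eaten under the Holling disc equation
    (no prey depletion), used here as a stand-in functional response mean."""
    return np.clip(attack * T / (1.0 + attack * handling * n0), 1e-9, 1 - 1e-9)

def beta_binomial_loglik(n_eaten, n0, attack, handling, precision, T=1.0):
    """Beta-binomial log-likelihood with mean given by the functional response
    and a precision parameter controlling over-dispersion."""
    p = holling_type_II(n0, attack, handling, T)
    a, b = p * precision, (1.0 - p) * precision
    return np.sum(betabinom.logpmf(n_eaten, n0, a, b))

# Example: log-likelihood for a small hypothetical dataset of prey densities
# and numbers eaten.
n0 = np.array([5, 10, 20, 40, 80])
n_eaten = np.array([3, 6, 9, 12, 14])
print(beta_binomial_loglik(n_eaten, n0, attack=1.2, handling=0.05, precision=10.0))
```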

    Melanoma Cell Colony Expansion Parameters Revealed by Approximate Bayesian Computation

    In vitro studies and mathematical models are now being widely used to study the underlying mechanisms driving the expansion of cell colonies. This can improve our understanding of cancer formation and progression. Although much progress has been made in terms of developing and analysing mathematical models, far less progress has been made in terms of understanding how to estimate model parameters using experimental in vitro image-based data. To address this issue, a new approximate Bayesian computation (ABC) algorithm is proposed to estimate key parameters governing the expansion of melanoma cell (MM127) colonies, including cell diffusivity, D, cell proliferation rate, λ, and cell-to-cell adhesion, q, in two experimental scenarios, namely with and without a chemical treatment to suppress cell proliferation. Even when little prior biological knowledge about the parameters is assumed, all parameters are precisely inferred with a small posterior coefficient of variation, approximately 2-12%. The ABC analyses reveal that the posterior distributions of D and q depend on the experimental elapsed time, whereas the posterior distribution of λ does not. The posterior mean values of D are in the ranges 226-268 µm² h⁻¹ and 311-351 µm² h⁻¹, and those of q in the ranges 0.23-0.39 and 0.32-0.61, for the experimental periods of 0-24 h and 24-48 h, respectively. Furthermore, we found that the posterior distribution of q also depends on the initial cell density, whereas the posterior distributions of D and λ do not. The ABC approach also enables information from the two experiments to be combined, resulting in greater precision for all estimates of D and λ.
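
    For orientation, the sketch below shows plain ABC rejection with a cheap stand-in simulator and toy summary statistics; the actual analysis uses a lattice-based colony simulator, image-based summaries and a more sophisticated ABC algorithm, so every function, prior range and numerical value here is an assumption.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_summaries(D, lam, q):
    """Stand-in for the expensive colony-expansion simulator: returns two toy
    summary statistics as a noisy function of the parameters."""
    area = D * (1.0 + lam) * (1.0 - 0.5 * q) + rng.normal(0.0, 5.0)
    edge = lam * 100.0 + q * 50.0 + rng.normal(0.0, 2.0)
    return np.array([area, edge])

def abc_rejection(observed, n_sims=20000, keep_frac=0.01):
    """Basic ABC rejection: draw parameters from the prior, simulate summaries,
    and keep the draws whose summaries are closest to the observed ones."""
    # Vague priors (illustrative ranges only).
    D = rng.uniform(100.0, 400.0, n_sims)   # cell diffusivity
    lam = rng.uniform(0.0, 0.1, n_sims)     # cell proliferation rate
    q = rng.uniform(0.0, 1.0, n_sims)       # cell-to-cell adhesion
    sims = np.array([simulate_summaries(d, l, a) for d, l, a in zip(D, lam, q)])
    scale = sims.std(axis=0)
    dist = np.linalg.norm((sims - observed) / scale, axis=1)
    keep = dist <= np.quantile(dist, keep_frac)
    return D[keep], lam[keep], q[keep]

post_D, post_lam, post_q = abc_rejection(observed=np.array([250.0, 20.0]))
print(post_D.mean(), post_lam.mean(), post_q.mean())
```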

    Modelling environmental drivers of black band disease outbreaks in populations of foliose corals in the genus Montipora

    Seawater temperature anomalies associated with warming climate have been linked to increases in coral disease outbreaks that have contributed to coral reef declines globally. However, little is known about how seasonal-scale variations in environmental factors influence disease dynamics at the level of individual coral colonies. In this study, we applied a multi-state Markov model (MSM) to investigate the dynamics of black band disease (BBD) developing from apparently healthy corals and/or a precursor stage, termed 'cyanobacterial patches' (CP), in relation to seasonal variation in light and seawater temperature at two reef sites around Pelorus Island in the central sector of the Great Barrier Reef. The model predicted that the proportion of colonies transitioning from BBD to Healthy states within three months was approximately 57%, but 5.6% of BBD cases resulted in whole-colony mortality. According to our modelling, healthy coral colonies were more susceptible to BBD during summer months, when light levels were at their maxima and seawater temperatures were either rising or at their maxima. In contrast, CP mostly occurred during spring, when both light and seawater temperatures were rising. This suggests that the environmental drivers for healthy coral colonies transitioning into a CP state are different from those driving transitions into BBD. Our model predicts that (1) the transition from Healthy to CP state is best explained by increasing light, (2) the transition from Healthy to BBD occurs more frequently from early to late summer, (3) 20% of CP-infected corals developed BBD, although light and temperature appeared to have limited impact on this state transition, and (4) the number of transitions from Healthy to BBD differed significantly between the two study sites, potentially reflecting differences in localised wave action regimes.
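
    A multi-state Markov model of this kind is typically specified through a transition-intensity matrix whose rates depend on covariates, with interval transition probabilities obtained via a matrix exponential. The sketch below illustrates that structure for the four states mentioned above; the baseline rates, covariate coefficients and time unit (months) are made-up placeholders, not estimates from this study.

```python
import numpy as np
from scipy.linalg import expm

# States: 0 = Healthy, 1 = CP (cyanobacterial patch), 2 = BBD, 3 = Dead.
def intensity_matrix(light, temp):
    """Illustrative transition-intensity (Q) matrix whose off-diagonal rates
    depend log-linearly on standardised light and temperature covariates."""
    def rate(base, b_light, b_temp):
        return base * np.exp(b_light * light + b_temp * temp)
    Q = np.zeros((4, 4))
    Q[0, 1] = rate(0.10, 0.8, 0.2)   # Healthy -> CP
    Q[0, 2] = rate(0.05, 0.5, 0.6)   # Healthy -> BBD
    Q[1, 0] = rate(0.30, 0.0, 0.0)   # CP -> Healthy
    Q[1, 2] = rate(0.08, 0.1, 0.1)   # CP -> BBD
    Q[2, 0] = rate(0.20, 0.0, 0.0)   # BBD -> Healthy
    Q[2, 3] = rate(0.02, 0.0, 0.3)   # BBD -> Dead (whole-colony mortality)
    np.fill_diagonal(Q, -Q.sum(axis=1))
    return Q

# Three-month transition probability matrix under summer-like covariates.
P_3months = expm(3.0 * intensity_matrix(light=1.0, temp=1.0))
print(P_3months.round(3))
```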